Last Update: 2025/3/26
SenseFlow Workflow API
The SenseFlow Workflow API allows you to execute and manage workflows. It is suited to non-session-based applications such as translation, article writing, and summarization.
Endpoints
Execute Workflow
POST https://platform.llmprovider.ai/v1/agent/workflows/run
Execute a workflow. The workflow must be published before it can be executed.
Request Headers
Header | Value |
---|---|
Authorization | Bearer YOUR_API_KEY |
Content-Type | application/json |
Request Body
Parameter | Type | Required | Description |
---|---|---|---|
model | string | Yes | Agent name |
inputs | object | Yes | Key-value pairs of variables defined in the App |
response_mode | string | Yes | streaming (recommended) or blocking |
user | string | Yes | Unique identifier for the end user |
files | array | No | Array of file objects for multimodal inputs |
Files Object Structure
Field | Type | Description |
---|---|---|
type | string | File type (document, image, audio, video, custom) |
transfer_method | string | remote_url or local_file |
url | string | File URL (when transfer_method is remote_url) |
upload_file_id | string | File ID (when transfer_method is local_file) |
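For multimodal inputs, the files array sits alongside the other body parameters. A minimal sketch in Python of assembling such a request body (the agent name, user ID, and image URL here are placeholder values, not ones defined by this API):

```python
def build_run_payload(model, user, inputs=None, files=None):
    """Assemble a request body for POST /v1/agent/workflows/run."""
    payload = {
        "model": model,
        "inputs": inputs or {},
        "response_mode": "streaming",
        "user": user,
    }
    if files:  # optional: only include when passing multimodal inputs
        payload["files"] = files
    return payload

# A remote image referenced by URL; for uploaded files, use
# transfer_method "local_file" with an upload_file_id instead.
payload = build_run_payload(
    model="my-agent",  # placeholder agent name
    user="abc-123",
    files=[{
        "type": "image",
        "transfer_method": "remote_url",
        "url": "https://example.com/photo.png",
    }],
)
```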
Example
{
  "model": "",
  "inputs": {},
  "response_mode": "streaming",
  "user": "abc-123"
}
Response
The response varies based on the response_mode:
- For blocking mode: returns a WorkflowResponse object
- For streaming mode: returns a stream of ChunkWorkflowResponse objects
WorkflowResponse Structure
Field | Type | Description |
---|---|---|
workflow_run_id | string | Unique ID of workflow execution |
task_id | string | Task ID for request tracking |
data | object | Execution result details |
Data Object Structure
Field | Type | Description |
---|---|---|
id | string | ID of workflow execution |
workflow_id | string | ID of related workflow |
status | string | Status: running/succeeded/failed/stopped |
outputs | object | Content of output |
error | string | Reason for error (optional) |
elapsed_time | float | Total seconds used |
total_tokens | integer | Tokens used |
total_steps | integer | Total steps executed |
created_at | timestamp | Start time |
finished_at | timestamp | End time |
Streaming Response Events
Each streaming chunk starts with data: and chunks are separated by \n\n. Event types include:
Event Type | Description | Fields |
---|---|---|
workflow_started | Workflow starts execution | task_id, workflow_run_id, data(id, workflow_id, sequence_number, created_at) |
node_started | Node execution started | task_id, workflow_run_id, data(id, node_id, node_type, title, index, predecessor_node_id, inputs, created_at) |
node_finished | Node execution ended | task_id, workflow_run_id, data(id, node_id, node_type, title, index, predecessor_node_id, inputs, process_data, outputs, status, error, elapsed_time, execution_metadata, created_at) |
workflow_finished | Workflow execution ended | task_id, workflow_run_id, data(id, workflow_id, status, outputs, error, elapsed_time, total_tokens, total_steps, created_at, finished_at) |
error | Stream error event | task_id, message_id, status, code, message |
ping | Keep-alive ping (every 10s) | - |
tts_message | TTS audio chunk (base64-encoded) | task_id, conversation_id, message_id, audio, created_at |
tts_message_end | End of TTS audio stream | task_id, conversation_id, message_id, audio (empty string), created_at |
Example Responses
For blocking mode:
{
  "workflow_run_id": "djflajgkldjgd",
  "task_id": "9da23599-e713-473b-982c-4328d4f5c78a",
  "data": {
    "id": "fdlsjfjejkghjda",
    "workflow_id": "fldjaslkfjlsda",
    "status": "succeeded",
    "outputs": {
      "text": "Nice to meet you."
    },
    "error": null,
    "elapsed_time": 0.875,
    "total_tokens": 3562,
    "total_steps": 8,
    "created_at": 1705407629,
    "finished_at": 1727807631
  }
}
For streaming mode:
data: {"event": "workflow_started", "task_id": "5ad4cb98-f0c7-4085-b384-88c403be6290", "workflow_run_id": "5ad498-f0c7-4085-b384-88cbe6290", "data": {"id": "5ad498-f0c7-4085-b384-88cbe6290", "workflow_id": "dfjasklfjdslag", "sequence_number": 1, "created_at": 1679586595}}
data: {"event": "node_started", "task_id": "5ad4cb98-f0c7-4085-b384-88c403be6290", "workflow_run_id": "5ad498-f0c7-4085-b384-88cbe6290", "data": {"id": "5ad498-f0c7-4085-b384-88cbe6290", "node_id": "dfjasklfjdslag", "node_type": "start", "title": "Start", "index": 0, "predecessor_node_id": "fdljewklfklgejlglsd", "inputs": {}, "created_at": 1679586595}}
data: {"event": "node_finished", "task_id": "5ad4cb98-f0c7-4085-b384-88c403be6290", "workflow_run_id": "5ad498-f0c7-4085-b384-88cbe6290", "data": {"id": "5ad498-f0c7-4085-b384-88cbe6290", "node_id": "dfjasklfjdslag", "node_type": "start", "title": "Start", "index": 0, "predecessor_node_id": "fdljewklfklgejlglsd", "inputs": {}, "outputs": {}, "status": "succeeded", "elapsed_time": 0.324, "execution_metadata": {"total_tokens": 63127864, "total_price": 2.378, "currency": "USD"}, "created_at": 1679586595}}
data: {"event": "workflow_finished", "task_id": "5ad4cb98-f0c7-4085-b384-88c403be6290", "workflow_run_id": "5ad498-f0c7-4085-b384-88cbe6290", "data": {"id": "5ad498-f0c7-4085-b384-88cbe6290", "workflow_id": "dfjasklfjdslag", "outputs": {}, "status": "succeeded", "elapsed_time": 0.324, "total_tokens": 63127864, "total_steps": 1, "created_at": 1679586595, "finished_at": 1679976595}}
data: {"event": "tts_message", "conversation_id": "23dd85f3-1a41-4ea0-b7a9-062734ccfaf9", "message_id": "a8bdc41c-13b2-4c18-bfd9-054b9803038c", "created_at": 1721205487, "task_id": "3bf8a0bb-e73b-4690-9e66-4e429bad8ee7", "audio": "qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq"}
data: {"event": "tts_message_end", "conversation_id": "23dd85f3-1a41-4ea0-b7a9-062734ccfaf9", "message_id": "a8bdc41c-13b2-4c18-bfd9-054b9803038c", "created_at": 1721205487, "task_id": "3bf8a0bb-e73b-4690-9e66-4e429bad8ee7", "audio": ""}
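A client consumes the stream by reading line by line, skipping empty separator lines, stripping the data: prefix, and decoding the JSON payload. A minimal parsing sketch in Python (the event handling shown is illustrative, not a required client structure):

```python
import json

def parse_sse_line(line):
    """Decode one 'data: {...}' line from the stream; return None otherwise."""
    line = line.strip()
    if not line.startswith("data:"):
        return None  # blank separator or other noise
    return json.loads(line[len("data:"):].strip())

# Example: pull the final outputs out of a workflow_finished event.
chunk = 'data: {"event": "workflow_finished", "data": {"status": "succeeded", "outputs": {"text": "done"}}}'
event = parse_sse_line(chunk)
if event and event["event"] == "workflow_finished":
    result = event["data"]["outputs"]
```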
Example Request
- Shell
- Python
- Node.js
curl -X POST 'https://platform.llmprovider.ai/v1/agent/workflows/run' \
  --header 'Authorization: Bearer YOUR_API_KEY' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "model": "",
    "inputs": {},
    "response_mode": "streaming",
    "user": "abc-123"
  }'
import requests

api_key = 'YOUR_API_KEY'
url = 'https://platform.llmprovider.ai/v1/agent/workflows/run'
headers = {
    'Authorization': f'Bearer {api_key}',
    'Content-Type': 'application/json'
}
data = {
    'model': '',
    'inputs': {},
    'response_mode': 'streaming',
    'user': 'abc-123'
}
response = requests.post(url, headers=headers, json=data)
print(response.json())
const axios = require('axios');

const apiKey = 'YOUR_API_KEY';
const url = 'https://platform.llmprovider.ai/v1/agent/workflows/run';
const data = {
  model: '',
  inputs: {},
  response_mode: 'streaming',
  user: 'abc-123'
};
const headers = {
  'Authorization': `Bearer ${apiKey}`,
  'Content-Type': 'application/json'
};

axios.post(url, data, { headers })
  .then(response => console.log(response.data))
  .catch(error => console.error(error));
Get Workflow Run Detail
GET https://platform.llmprovider.ai/v1/agent/workflows/run/:workflow_id
Retrieve the execution results of a workflow task based on the workflow execution ID.
Request Headers
Header | Value |
---|---|
Authorization | Bearer YOUR_API_KEY |
Path Parameters
Parameter | Type | Description |
---|---|---|
workflow_id | string | Workflow execution ID |
Query Parameters
Parameter | Type | Description |
---|---|---|
model | string | Agent name |
Response
Field | Type | Description |
---|---|---|
id | string | ID of workflow execution |
workflow_id | string | ID of related workflow |
status | string | Status: running/succeeded/failed/stopped |
inputs | object | Content of input |
outputs | object | Content of output |
error | string | Reason for error |
total_steps | integer | Total steps of task |
total_tokens | integer | Total tokens used |
created_at | timestamp | Start time |
finished_at | timestamp | End time |
elapsed_time | float | Total seconds used |
Example Response
{
  "id": "b1ad3277-089e-42c6-9dff-6820d94fbc76",
  "workflow_id": "19eff89f-ec03-4f75-b0fc-897e7effea02",
  "status": "succeeded",
  "inputs": "{\"sys.files\": [], \"sys.user_id\": \"abc-123\"}",
  "outputs": null,
  "error": null,
  "total_steps": 3,
  "total_tokens": 0,
  "created_at": "Thu, 18 Jul 2024 03:17:40 -0000",
  "finished_at": "Thu, 18 Jul 2024 03:18:10 -0000",
  "elapsed_time": 30.098514399956912
}
Example Request
- Shell
- Python
- Node.js
curl -X GET 'https://platform.llmprovider.ai/v1/agent/workflows/run/workflow_123?model=' \
  --header 'Authorization: Bearer YOUR_API_KEY'
import requests

api_key = 'YOUR_API_KEY'
workflow_id = 'workflow_123'
model = ''
url = f'https://platform.llmprovider.ai/v1/agent/workflows/run/{workflow_id}?model={model}'
headers = {
    'Authorization': f'Bearer {api_key}'
}
response = requests.get(url, headers=headers)
print(response.json())
const axios = require('axios');

const apiKey = 'YOUR_API_KEY';
const workflowId = 'workflow_123';
const model = '';
const url = `https://platform.llmprovider.ai/v1/agent/workflows/run/${workflowId}?model=${model}`;
const headers = {
  'Authorization': `Bearer ${apiKey}`
};

axios.get(url, { headers })
  .then(response => console.log(response.data))
  .catch(error => console.error(error));
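Because a run may still be in progress when queried, a client will typically poll this endpoint until status reaches a terminal value. A hedged sketch in Python (the polling interval and poll cap are arbitrary choices; the fetch parameter is only there so the loop can be exercised without a live API):

```python
import time

TERMINAL = {"succeeded", "failed", "stopped"}

def wait_for_run(workflow_id, api_key, model="", interval=2.0, max_polls=150, fetch=None):
    """Poll the run-detail endpoint until the workflow reaches a terminal status."""
    if fetch is None:
        import requests  # real HTTP call, as in the examples above
        url = f"https://platform.llmprovider.ai/v1/agent/workflows/run/{workflow_id}"
        headers = {"Authorization": f"Bearer {api_key}"}
        fetch = lambda: requests.get(url, headers=headers, params={"model": model}).json()
    for _ in range(max_polls):
        detail = fetch()
        if detail.get("status") in TERMINAL:
            return detail
        time.sleep(interval)
    raise TimeoutError(f"workflow {workflow_id} did not reach a terminal status")

# Illustration with canned responses (no network): the second poll succeeds.
responses = iter([{"status": "running"}, {"status": "succeeded", "outputs": {}}])
detail = wait_for_run("workflow_123", "YOUR_API_KEY", interval=0, fetch=lambda: next(responses))
```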
Stop Workflow
POST https://platform.llmprovider.ai/v1/agent/workflows/tasks/:task_id/stop
Stop a running workflow. Only available for streaming mode.
Request Headers
Header | Value |
---|---|
Authorization | Bearer YOUR_API_KEY |
Content-Type | application/json |
Path Parameters
Parameter | Type | Description |
---|---|---|
task_id | string | Task ID obtained from the streaming response |
Request Body Parameters
Parameter | Type | Required | Description |
---|---|---|---|
model | string | Yes | Agent name |
user | string | Yes | User identifier (must match the workflow API user ID) |
Response
Field | Type | Description |
---|---|---|
result | string | Always "success" |
Example Response
{
  "result": "success"
}
Example Request
- Shell
- Python
- Node.js
curl -X POST 'https://platform.llmprovider.ai/v1/agent/workflows/tasks/task_123/stop' \
  --header 'Authorization: Bearer YOUR_API_KEY' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "model": "",
    "user": "abc-123"
  }'
import requests

api_key = 'YOUR_API_KEY'
task_id = 'task_123'
url = f'https://platform.llmprovider.ai/v1/agent/workflows/tasks/{task_id}/stop'
headers = {
    'Authorization': f'Bearer {api_key}',
    'Content-Type': 'application/json'
}
data = {
    'model': '',
    'user': 'abc-123'
}
response = requests.post(url, headers=headers, json=data)
print(response.json())
const axios = require('axios');

const apiKey = 'YOUR_API_KEY';
const taskId = 'task_123';
const url = `https://platform.llmprovider.ai/v1/agent/workflows/tasks/${taskId}/stop`;
const data = {
  model: '',
  user: 'abc-123'
};
const headers = {
  'Authorization': `Bearer ${apiKey}`,
  'Content-Type': 'application/json'
};

axios.post(url, data, { headers })
  .then(response => console.log(response.data))
  .catch(error => console.error(error));
Get Workflow Logs
GET https://platform.llmprovider.ai/v1/agent/workflows/logs
Returns workflow execution logs, with pagination support.
Request Headers
Header | Value |
---|---|
Authorization | Bearer YOUR_API_KEY |
Query Parameters
Parameter | Type | Description |
---|---|---|
model | string | Agent name |
keyword | string | Keyword to search |
status | string | Filter by status (succeeded/failed/stopped) |
page | integer | Current page (default: 1) |
limit | integer | Items per page (default: 20) |
Response
Field | Type | Description |
---|---|---|
page | integer | Current page |
limit | integer | Items per page |
total | integer | Total number of items |
has_more | boolean | Whether there are more pages |
data | array | Array of workflow log objects |
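The has_more flag makes it simple to walk every page of logs. A sketch of the paging loop in Python (the page fetcher is injected so the logic is shown independently of the HTTP call; it is shaped on the response fields above):

```python
def iter_logs(fetch_page, limit=20):
    """Yield every log entry, advancing pages until has_more is False.

    fetch_page(page, limit) must return a dict shaped like the response
    above: {"page", "limit", "total", "has_more", "data"}.
    """
    page = 1
    while True:
        resp = fetch_page(page, limit)
        yield from resp["data"]
        if not resp["has_more"]:
            break
        page += 1

# Illustration with a canned two-page response (no network involved):
fake_pages = {
    1: {"page": 1, "limit": 2, "total": 3, "has_more": True, "data": ["a", "b"]},
    2: {"page": 2, "limit": 2, "total": 3, "has_more": False, "data": ["c"]},
}
logs = list(iter_logs(lambda page, limit: fake_pages[page], limit=2))
```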
Example Request
- Shell
- Python
- Node.js
curl -X GET 'https://platform.llmprovider.ai/v1/agent/workflows/logs?model=&page=1&limit=20' \
  --header 'Authorization: Bearer YOUR_API_KEY'
import requests

api_key = 'YOUR_API_KEY'
url = 'https://platform.llmprovider.ai/v1/agent/workflows/logs'
headers = {
    'Authorization': f'Bearer {api_key}'
}
params = {
    'model': '',
    'page': 1,
    'limit': 20
}
response = requests.get(url, headers=headers, params=params)
print(response.json())
const axios = require('axios');

const apiKey = 'YOUR_API_KEY';
const url = 'https://platform.llmprovider.ai/v1/agent/workflows/logs';
const headers = {
  'Authorization': `Bearer ${apiKey}`
};
const params = {
  model: '',
  page: 1,
  limit: 20
};

axios.get(url, { headers, params })
  .then(response => console.log(response.data))
  .catch(error => console.error(error));